186 research outputs found

    3D deep convolutional neural network-based ventilated lung segmentation using multi-nuclear hyperpolarized gas MRI

    Hyperpolarized gas MRI enables visualization of regional lung ventilation with high spatial resolution. Segmentation of the ventilated lung is required to calculate clinically relevant biomarkers. Recent research in deep learning (DL) has shown promising results for numerous segmentation problems. In this work, we evaluate a 3D V-Net to segment ventilated lung regions on hyperpolarized gas MRI scans. The dataset consists of 743 helium-3 (3He) or xenon-129 (129Xe) volumetric scans and corresponding expert segmentations from 326 healthy subjects and patients with a wide range of pathologies. We evaluated segmentation performance for several DL experimental methods via overlap, distance and error metrics and compared them to conventional segmentation methods, namely spatial fuzzy c-means (SFCM) and K-means clustering. We observed that training on combined 3He and 129Xe MRI scans outperformed the other DL methods, achieving a mean ± SD Dice of 0.958 ± 0.022, average boundary Hausdorff distance of 2.22 ± 2.16 mm, Hausdorff 95th percentile of 8.53 ± 12.98 mm and relative error of 0.087 ± 0.049. Moreover, no difference in performance was observed between 129Xe and 3He scans in the testing set. Combined training on 129Xe and 3He yielded statistically significant improvements over the conventional methods (p < 0.0001). The DL approach evaluated provides accurate, robust and rapid segmentations of ventilated lung regions, successfully excludes non-lung regions such as the airways and noise artifacts, and is expected to eliminate the need for, or significantly reduce, subsequent time-consuming manual editing.
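    The overlap and error metrics reported above are straightforward to compute from binary masks. A minimal NumPy sketch (toy volumes, not the paper's data or code):

```python
import numpy as np

def dice(pred, truth):
    """Dice overlap between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def relative_error(pred, truth):
    """Voxel disagreement relative to the reference volume."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return np.logical_xor(pred, truth).sum() / truth.sum()

# Toy volumes: a 6x6x6 cube, and the same cube shifted by one voxel.
truth = np.zeros((10, 10, 10), dtype=bool)
truth[2:8, 2:8, 2:8] = True
pred = np.roll(truth, 1, axis=0)
print(round(dice(pred, truth), 3), round(relative_error(pred, truth), 3))
# 0.833 0.333
```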

    HeMIS: Hetero-Modal Image Segmentation

    We introduce a deep learning image segmentation framework that is extremely robust to missing imaging modalities. Instead of attempting to impute or synthesize missing data, the proposed approach learns, for each modality, an embedding of the input image into a single latent vector space in which arithmetic operations (such as taking the mean) are well defined. Points in that space, averaged over the modalities available at inference time, can then be further processed to yield the desired segmentation. As such, any combinatorial subset of the available modalities can be provided as input, without having to learn a combinatorial number of imputation models. Evaluated on two neurological MRI datasets (brain tumors and MS lesions), the approach yields state-of-the-art segmentation results when provided with all modalities; moreover, its performance degrades remarkably gracefully when modalities are removed, significantly more so than alternative mean-filling or other synthesis approaches. Comment: Accepted as an oral presentation at MICCAI 2016.
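    The core fusion idea — embedding each modality into a common latent space and averaging over whatever is available — can be sketched in a few lines. The embeddings below are made-up placeholders, not HeMIS's actual backend outputs:

```python
import numpy as np

# Hypothetical per-modality embeddings in a shared latent space
# (the names and vectors are illustrative only).
embeddings = {
    "T1":    np.array([0.2, 1.0, -0.5]),
    "T2":    np.array([0.4, 0.8, -0.1]),
    "FLAIR": np.array([0.0, 1.2, -0.3]),
}

def fuse(available):
    """First and second moments over whichever modalities are present."""
    stack = np.stack([embeddings[m] for m in available])
    return stack.mean(axis=0), stack.var(axis=0)

mean_all, _ = fuse(["T1", "T2", "FLAIR"])
mean_two, _ = fuse(["T1", "FLAIR"])   # same code path with a modality missing
print(mean_all, mean_two)
```

    Because fusion is a mean rather than a fixed-width concatenation, no separate model is needed per subset of modalities.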

    Shallow vs deep learning architectures for white matter lesion segmentation in the early stages of multiple sclerosis

    In this work, we present a comparison of a shallow and a deep learning architecture for the automated segmentation of white matter lesions in MR images of multiple sclerosis patients. In particular, we train and test both methods on early-stage disease patients, to verify their performance in challenging conditions more similar to a clinical setting than what is typically provided in multiple sclerosis segmentation challenges. Furthermore, we evaluate a prototype naive combination of the two methods, which refines the final segmentation. All methods were trained on 32 patients, and the evaluation was performed on a pure test set of 73 cases. Results show a low lesion-wise false positive rate (30%) for the deep learning architecture, whereas the shallow architecture yields the best Dice coefficient (63%) and volume difference (19%). Combining the shallow and deep architectures further improves the lesion-wise metrics (69% and 26% lesion-wise true and false positive rate, respectively). Comment: Accepted to the MICCAI 2018 Brain Lesion (BrainLes) workshop.
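    Lesion-wise true/false positive rates of the kind reported here are computed over connected components rather than voxels. A small self-contained sketch, using toy 2D masks and one overlap-based matching rule among several possible:

```python
import numpy as np
from collections import deque

def components(mask):
    """4-connected components of a 2D binary mask, as sets of coordinates."""
    seen, comps = set(), []
    for seed in zip(*np.nonzero(mask)):
        if seed in seen:
            continue
        comp, queue = set(), deque([seed])
        seen.add(seed)
        while queue:
            y, x = queue.popleft()
            comp.add((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and (ny, nx) not in seen):
                    seen.add((ny, nx))
                    queue.append((ny, nx))
        comps.append(comp)
    return comps

def lesion_rates(pred, truth):
    """Lesion-wise TPR/FPR: a lesion counts as found if any voxel overlaps."""
    pred_vox = set(zip(*np.nonzero(pred)))
    truth_vox = set(zip(*np.nonzero(truth)))
    truth_lesions, pred_lesions = components(truth), components(pred)
    tpr = sum(1 for c in truth_lesions if c & pred_vox) / len(truth_lesions)
    fpr = sum(1 for c in pred_lesions if not (c & truth_vox)) / len(pred_lesions)
    return tpr, fpr

# Two true lesions; the prediction hits one and adds one spurious blob.
truth = np.zeros((8, 8), dtype=bool)
truth[1:3, 1:3] = truth[5:7, 5:7] = True
pred = np.zeros_like(truth)
pred[1:3, 1:3] = pred[5:7, 0:2] = True
print(lesion_rates(pred, truth))  # (0.5, 0.5)
```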

    Automatic Segmentation of Muscle Tissue and Inter-muscular Fat in Thigh and Calf MRI Images

    Magnetic resonance imaging (MRI) of thigh and calf muscles is one of the most effective techniques for estimating fat infiltration in muscular dystrophies. The infiltration of adipose tissue into the diseased muscle region varies in its severity across, and within, patients. In order to quantify the infiltration of fat efficiently, accurate segmentation of muscle and fat is needed. The amount of infiltrated fat is typically estimated visually by experts. Several algorithmic solutions have been proposed for automatic segmentation. While these methods may work well in mild cases, they struggle in moderate and severe cases due to the high variability in the intensity of infiltration and the tissue's heterogeneous nature. To address these challenges, we propose a deep-learning approach, producing robust results with high Dice Similarity Coefficients (DSC) of 0.964, 0.917 and 0.933 for muscle-region, healthy muscle and inter-muscular adipose tissue (IMAT) segmentation, respectively. Comment: 9 pages, 4 figures, 2 tables, MICCAI 2019, the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention.

    Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE

    Probabilistic modelling has been an essential tool in medical image analysis, especially for analyzing brain Magnetic Resonance Images (MRI). Recent deep learning techniques for estimating high-dimensional distributions, in particular Variational Autoencoders (VAEs), opened up new avenues for probabilistic modelling. Modelling of volumetric data has remained a challenge, however, because constraints on available computation and training data make it difficult to effectively leverage VAEs, which are well developed for 2D images. We propose a method to model the distribution of 3D MR brain volumes by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices. We do so by estimating the sample mean and covariance in the latent space of the 2D model over the slice direction. This combined model lets us sample new coherent stacks of latent variables to decode into the slices of a volume. We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy. We demonstrate that our proposed model is competitive in generating high-quality volumes at high resolutions, according to both traditional metrics and our proposed evaluation. Comment: Accepted for publication at MICCAI 2020. Code available at https://github.com/voanna/slices-to-3d-brain-vae
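    The slice-wise Gaussian model can be sketched directly: fit a mean and covariance over flattened stacks of slice latents, then draw one correlated stack to decode slice by slice. Sizes and data below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training latents: one d-dim code per slice, per volume
# (stand-ins for the 2D VAE's encodings; sizes are illustrative).
n_volumes, n_slices, d = 200, 16, 4
latents = rng.normal(size=(n_volumes, n_slices, d))

# Flatten each volume's stack of slice codes and fit one Gaussian, so the
# covariance captures correlations between neighbouring slices.
flat = latents.reshape(n_volumes, n_slices * d)
mu = flat.mean(axis=0)
cov = np.cov(flat, rowvar=False)

# Sample one coherent stack of slice latents, ready to decode slice by slice.
sample = rng.multivariate_normal(mu, cov).reshape(n_slices, d)
print(sample.shape)  # (16, 4)
```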

    Surface agnostic metrics for cortical volume segmentation and regression

    The cerebral cortex performs higher-order brain functions and is thus implicated in a range of cognitive disorders. Current analysis of cortical variation is typically performed by fitting surface mesh models to inner and outer cortical boundaries and investigating metrics such as surface area and cortical curvature or thickness. These, however, take a long time to run and are sensitive to motion and to image and surface resolution, which can prohibit their use in clinical settings. In this paper, we instead propose a machine learning solution, training a novel architecture to predict cortical thickness and curvature metrics from T2 MR images, while additionally returning metrics of prediction uncertainty. Our proposed model is tested on a clinical cohort (Down Syndrome) for which surface-based modelling often fails. Results suggest that deep convolutional neural networks are a viable option to predict cortical metrics across a range of brain development stages and pathologies.

    Brain Tumor Segmentation from Multi-Spectral MR Image Data Using Random Forest Classifier

    The development of brain tumor segmentation techniques based on multi-spectral MR image data has a relevant impact on clinical practice via better diagnosis, radiotherapy planning and follow-up studies. The task is also very challenging due to the great variety of tumor appearances, the presence of several noise effects, and differences in scanner sensitivity. This paper proposes an automatic procedure trained to distinguish gliomas from normal brain tissues in multi-spectral MRI data. The procedure is based on a random forest (RF) classifier, which uses 80 computed features besides the four observed ones, including morphological features, gradients, and Gabor wavelet features. The intermediary segmentation outcome provided by the RF is fed to a twofold post-processing step, which regularizes the shape of detected tumors and enhances the segmentation accuracy. The performance of the procedure was evaluated using the 274 records of the BraTS 2015 training data set. The achieved overall Dice scores of 85-86% represent highly accurate segmentation.
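    Assembling computed features alongside the observed channels is the bulk of the work before the classifier sees any data. A hedged NumPy sketch, where gradient magnitude and simple cross-channel statistics stand in for the paper's 80-feature set:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for one multi-spectral slice: four observed channels
# (think T1, T2, T1c, FLAIR; purely illustrative data).
vol = rng.normal(size=(4, 32, 32))

features = [vol]                          # the four observed intensities
for ch in vol:                            # computed features per channel
    gy, gx = np.gradient(ch)
    features.append(np.hypot(gy, gx)[None])       # gradient magnitude
features.append(vol.min(axis=0, keepdims=True))   # crude cross-channel
features.append(vol.max(axis=0, keepdims=True))   # morphology-like features

X = np.concatenate(features).reshape(-1, 32 * 32).T   # (voxels, features)
print(X.shape)  # (1024, 10)
```

    Each row of `X` is one voxel's feature vector, the shape a random forest expects for voxel-wise classification.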

    Automatic Tissue Segmentation with Deep Learning in Patients with Congenital or Acquired Distortion of Brain Anatomy

    Brains with complex distortion of cerebral anatomy present several challenges to automatic tissue segmentation methods for T1-weighted MR images. First, the very high variability in the morphology of the tissues can be incompatible with the prior knowledge embedded within the algorithms. Second, MR images of distorted brains are very scarce, so the methods in the literature have not addressed such cases so far. In this work, we present the first evaluation of state-of-the-art automatic tissue segmentation pipelines on T1-weighted images of brains with different severities of congenital or acquired brain distortion. We compare traditional pipelines and a deep learning model, i.e. a 3D U-Net trained on normal-appearing brains. Unsurprisingly, the traditional pipelines completely fail to segment the tissues with strong anatomical distortion. Surprisingly, the 3D U-Net provides useful segmentations that can be a valuable starting point for manual refinement by experts/neuroradiologists.

    3D U-Net Based Brain Tumor Segmentation and Survival Days Prediction

    The past few years have witnessed the prevalence of deep learning in many application scenarios, among which is medical image processing. Diagnosis and treatment of brain tumors require an accurate and reliable segmentation of the tumors as a prerequisite. However, such work conventionally costs brain surgeons a significant amount of time. Computer vision techniques could relieve surgeons of this tedious marking procedure. In this paper, a 3D U-Net based deep learning model is trained with brain-wise normalization and patching strategies for the brain tumor segmentation task in the BraTS 2019 competition. Dice coefficients for the enhancing tumor, tumor core, and whole tumor are 0.737, 0.807 and 0.894, respectively, on the validation dataset; on the test dataset these three values are 0.778, 0.798 and 0.852. Furthermore, numerical features, including the ratio of tumor size to brain size and the area of the tumor surface, as well as the age of the subjects, are extracted from the predicted tumor labels and used for the overall survival days prediction task. The accuracy reaches 0.448 on the validation dataset and 0.551 on the final test dataset. Comment: Third place award in the 2019 MICCAI BraTS challenge survival task [BraTS 2019](https://www.med.upenn.edu/cbica/brats2019.html).
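    The shape features used for survival prediction — tumor-to-brain size ratio and tumor surface area — can be read off predicted labels directly. A toy sketch, using a simple face-counting proxy for surface area rather than the authors' exact definition:

```python
import numpy as np

def surface_face_count(mask):
    """Count exposed voxel faces: a cheap proxy for tumour surface area."""
    faces = 0
    for axis in range(mask.ndim):
        padded = np.pad(mask, 1).astype(int)      # zero-pad so borders count
        faces += np.abs(np.diff(padded, axis=axis)).sum()
    return int(faces)

# Toy predicted labels: a 4x4x4 tumour inside a 10x10x10 brain mask.
brain = np.ones((10, 10, 10), dtype=bool)
tumor = np.zeros_like(brain)
tumor[3:7, 3:7, 3:7] = True

size_ratio = tumor.sum() / brain.sum()   # 64 / 1000 = 0.064
surface = surface_face_count(tumor)      # 6 faces of 4x4 voxels -> 96
print(size_ratio, surface)
```

    Scalars like these, plus age, form a small feature vector per subject for the downstream survival regressor.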